We Shouldn't Try to Make Conscious Software--Until We Should

#artificialintelligence

Robots or advanced artificial intelligences that "wake up" and become conscious are a staple of thought experiments and science fiction. Whether or not this is actually possible remains a matter of great debate. All of this uncertainty puts us in an unfortunate position: we do not know how to make conscious machines, and (given current measurement techniques) we won't know if we have created one. At the same time, this issue is of great importance, because the existence of conscious machines would have dramatic ethical consequences. We cannot directly detect consciousness in computers and the software that runs on them, any more than we can in frogs and insects.


Ethics and Policy for Technology -- Joanna Bryson

#artificialintelligence

Artificial Intelligence (AI) and robots often seem like fun science fiction, but in fact already affect our daily lives. For example, services like Google and Amazon help us find what we want by using AI. Every aspect of how Facebook works is based on AI and Machine Learning (ML). The reason your phone is so useful is it is full of AI -- sensing, acting, and learning about you. All these tools not only make us smarter, their intelligence is based partly on what they learn both from us and about us when we use them.


Clinical Data Sharing for AI: Proposed Framework Could Rouse Debate - AI Trends

#artificialintelligence

A group of doctors from Stanford University has proposed a framework for sharing clinical data for artificial intelligence (AI) that could set off a firestorm of debate about who truly owns medical data, the ethical obligations to share it, and how to properly police researchers who use it. On the other hand, the envisioned approach has parallels to the open science tactics currently being uniformly deployed to battle the COVID-19 pandemic. The framework's central premise is that clinical data should be treated as a public good when it is used for secondary purposes such as research or the development of AI algorithms, as detailed in a special report (doi: 10.1148/radiol.2020192536). That means broadening access to aggregated, de-identified clinical data, forbidding its sale, and holding everyone who interacts with it accountable for protecting patient privacy, explains study lead author David B. Larson, M.D., M.B.A., vice chair of clinical operations for the radiology department at Stanford University School of Medicine. Although the framework was published in a journal specific to radiology, and three of its authors are radiologists, the structure is "universally applicable to other types of medical data as well," says Larson.


Ethical Factors To Artificial Intelligence In The Legal Industry

#artificialintelligence

Artificial intelligence, commonly referred to as AI, has great potential to increase efficiency, accuracy, and cost savings within the legal industry. But the ability to make decisions autonomously, without any human involvement, has caused concern in some legal circles about the ethical implications. Specifically, an artificial decision does not apply the same critical thinking, intuition, and professional judgment traditionally practiced by a seasoned lawyer. While the point is valid, it is important to first consider how AI is transforming routine tasks in legal firms. AI in a law practice would use rules-based logic, developed with attorneys, to abbreviate labor-intensive tasks such as contract review, where items of concern would be brought to the attention of a practicing lawyer for review.


Just an Artifact: Why Machines are Perceived as Moral Agents

Bryson, Joanna J. (University of Bath) | Kime, Philip P. (Independent Researcher)

AAAI Conferences

How obliged can we be to AI, and how much danger does it pose to us? A surprising proportion of our society holds exaggerated fears or hopes for AI, such as the fear of robot world conquest, or the hope that AI will indefinitely perpetuate our culture. These misapprehensions are symptomatic of a larger problem: a confusion about the nature and origins of ethics and its role in society. While AI technologies do pose promises and threats, these are not qualitatively different from those posed by other artifacts of our culture which are largely ignored, from factories to advertising, weapons to political systems. Ethical systems are based on notions of identity, and the exaggerated hopes and fears of AI derive from our cultures having not yet accommodated the fact that language and reasoning are no longer uniquely human. The experience of AI may improve our ethical intuitions and self-understanding, potentially helping our societies make better-informed decisions on serious ethical dilemmas.